Extended Parallelism Models For Optimization On Massively Parallel Computers

Author

  • M. S. Eldred
Abstract

Single-level parallel optimization approaches, those in which either the simulation code executes in parallel or the optimization algorithm invokes multiple simultaneous single-processor analyses, have been investigated previously and have been shown to be effective in reducing the time required to compute optimal solutions. However, these approaches have clear performance limitations that prevent effective scaling with the thousands of processors available in massively parallel supercomputers. In more recent work, a capability has been developed for multilevel parallelism in which multiple instances of multiprocessor simulations are coordinated simultaneously. This implementation employs a master-slave approach using the Message Passing Interface (MPI) within the DAKOTA software toolkit. Mathematical analysis on achieving peak efficiency in multilevel parallelism has shown that the most effective processor partitioning scheme is the one that limits the size of multiprocessor simulations in favor of concurrent execution of multiple simulations. That is, if both coarse-grained and fine-grained parallelism can be exploited, then preference should be given to the coarse-grained parallelism. This analysis was verified in multilevel parallel computational experiments on networks of workstations (NOWs) and on the Intel TeraFLOPS massively parallel supercomputer. In current work, methods for exploiting additional coarse-grained parallelism in optimization are being investigated so that fine-grained efficiency losses can be further minimized. These activities are focusing on both algorithmic coarse-grained parallelism (multiple independent function evaluations) through the development of speculative gradient methods and concurrent iterator strategies, and on function evaluation coarse-grained parallelism (multiple separable simulations within a function evaluation) through the development of general partitioning and nested synchronization facilities. The net result is a total of four separate levels of parallelism which can minimize efficiency losses and achieve near-linear scaling on massively parallel computers.
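
The master-slave coordination of multiple concurrent multiprocessor simulations described above is built on MPI. The sketch below (in C, using standard MPI calls) illustrates the basic partitioning idea: the world communicator is split into several simulation-server partitions so that multiple simulations run concurrently (coarse-grained parallelism) while each simulation still executes in parallel on its own sub-communicator (fine-grained parallelism). This is an illustrative example rather than DAKOTA's actual implementation; NUM_SERVERS and run_simulation() are placeholder names introduced here.

/* Illustrative sketch: partition MPI_COMM_WORLD into NUM_SERVERS
 * concurrent simulation-server communicators, favoring coarse-grained
 * parallelism (many small partitions) over one large partition. */
#include <mpi.h>
#include <stdio.h>

#define NUM_SERVERS 4   /* assumed number of concurrent simulation partitions */

/* Placeholder for a parallel simulation running on its own communicator. */
static double run_simulation(MPI_Comm sim_comm, int server_id)
{
    int sim_rank, sim_size;
    MPI_Comm_rank(sim_comm, &sim_rank);
    MPI_Comm_size(sim_comm, &sim_size);
    if (sim_rank == 0)
        printf("server %d: simulation running on %d processors\n",
               server_id, sim_size);
    return 0.0;  /* would return an objective-function value */
}

int main(int argc, char **argv)
{
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Each partition receives roughly world_size / NUM_SERVERS processors;
     * the "color" assigns every rank to one simulation server. */
    int server_id = world_rank % NUM_SERVERS;

    MPI_Comm sim_comm;
    MPI_Comm_split(MPI_COMM_WORLD, server_id, world_rank, &sim_comm);

    run_simulation(sim_comm, server_id);

    MPI_Comm_free(&sim_comm);
    MPI_Finalize();
    return 0;
}

With 32 processors and NUM_SERVERS set to 4, for example, each partition receives 8 processors and four simulations execute concurrently, rather than all 32 processors being devoted to a single simulation instance.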

Related articles

A Critical Analysis of Multigrid Methods on Massively Parallel Computers

The hierarchical nature of multigrid algorithms leaves domain-parallel strategies with a deficiency of parallelism as the computation moves to coarser and coarser grids. To introduce more parallelism, several strategies have been designed to project the original problem space into non-interfering subspaces, allowing all grids to relax concurrently. Our objective is to understand the potential effec...

Multilevel Parallelism for Optimization on MP Computers: Theory and Experiment

Parallel optimization approaches which exploit only a single type of parallelism (e.g., a single simulation instance executes in parallel or an optimization algorithm manages concurrent serial analyses) have clear performance limitations that prevent effective scaling with the thousands of processors available in massively parallel (MP) supercomputers. This motivated the development of a two-le...

Design of scalable optical interconnection networks for massively parallel computers

The increased amount of data handled by current information systems, coupled with the ever-growing need for more processing functionality and system throughput, is putting stringent demands on communication bandwidths and processing speeds. While the design of high-speed processing elements has progressed significantly, progress on designing high-performance interconnection netwo...

Concurrent Constraint Logic Programming On Massively Parallel SIMD Computers

With the advent of cost-effective massively parallel computers, researchers conjecture that the future constraint logic programming system is composed of a massively parallel constraint solver as the back-end with a concurrent inference engine as the front-end [Coh90]. This paper represents an attempt to build a constraint logic programming system on a massively parallel SIMD computer. A concurre...

Computers and Thought Award: Challenges of Massive Parallelism

Artificial Intelligence has been the field of study for exploring the principles underlying thought, and utilizing their discovery to develop useful computers. Traditional AI models have been, consciously or subconsciously, optimized for available computing resources, which has led AI in certain directions. The emergence of massively parallel computers liberates the way intelligence may be...

Publication date: 1999